
    The Impact of Profiling Versus Static Analysis in Precision Tuning

    Approximate computing techniques, such as precision tuning, are widely recognized as key enablers for the next generation of computing systems, where computation quality metrics play an important role. In precision tuning, a trade-off between the accuracy of computations and latency (and/or energy) is established, but identifying the opportunities for applying this approximate computing technique is often challenging. In this article, we compare two different approaches - worst-case static annotation and profile-guided annotation - and their implications when used in a precision tuning framework. To ensure a fair comparison, we implement the profile-guided approach in an existing tool, TAFFO, and experimentally compare it to the original static approach used by the tool. We validate our considerations using the well-known PolyBench/C benchmark suite and two real-world application case studies. Our findings demonstrate that the profile-guided approach, when fed with representative profiling data, requires less expertise to employ while delivering comparable speedup and better accuracy than the static approach.
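
The practical difference between the two annotation styles can be sketched with a toy fixed-point format chooser. This is an illustration only: TAFFO's actual range analysis and data-type allocation are more sophisticated, and the variable ranges below are invented.

```python
import math

def fixed_point_format(lo, hi, word_bits=32):
    """Split a signed fixed-point word into integer and fractional bits
    so that every value in [lo, hi] is representable."""
    magnitude = max(abs(lo), abs(hi))
    int_bits = max(0, math.ceil(math.log2(magnitude + 1)))
    frac_bits = word_bits - 1 - int_bits  # one bit reserved for the sign
    return int_bits, frac_bits

# Worst-case static annotation: the programmer declares the widest range
# the variable could ever take, so few bits are left for the fraction.
print(fixed_point_format(-1000.0, 1000.0))  # → (10, 21)

# Profile-guided annotation: a profiling run records the values actually
# observed, which are often far narrower, freeing bits for accuracy.
observed = [0.12, -0.75, 0.98, -0.33]
print(fixed_point_format(min(observed), max(observed)))  # → (1, 30)
```

The narrower profiled range yields nine extra fractional bits in this toy case, which is the mechanism behind the better accuracy the comparison reports.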

    Measuring Information Leakage in Website Fingerprinting Attacks and Defenses

    Tor provides low-latency anonymous and uncensored network access against a local or network adversary. Due to the design choice to minimize traffic overhead (and increase the pool of potential users), Tor allows some information about the client's connections to leak. Attacks that use features extracted from this information to infer the website a user visits are called Website Fingerprinting (WF) attacks. We develop a methodology and tools to measure the amount of information leaked about a website. We apply this tool to a comprehensive set of features extracted from a large set of websites and WF defense mechanisms, allowing us to make more fine-grained observations about WF attacks and defenses. Comment: In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS '18).
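
The quantity being measured, the information a traffic feature F leaks about the visited website W, can be illustrated with a minimal plug-in estimate of the mutual information I(F; W) from observed (feature, website) pairs. This is a generic histogram-based estimator, not the paper's methodology, and the traces are invented.

```python
from collections import Counter
from math import log2

def mutual_information(samples):
    """Plug-in estimate of I(F; W) in bits from (feature, website) pairs."""
    n = len(samples)
    joint = Counter(samples)
    feat = Counter(f for f, _ in samples)
    site = Counter(w for _, w in samples)
    mi = 0.0
    for (f, w), count in joint.items():
        p_fw = count / n
        mi += p_fw * log2(p_fw / ((feat[f] / n) * (site[w] / n)))
    return mi

# Toy traces: a bucketed packet count observed for two websites.
# A feature that perfectly distinguishes them leaks log2(2) = 1 bit.
samples = [(40, "site A")] * 50 + [(90, "site B")] * 50
print(mutual_information(samples))  # → 1.0
```

A defense succeeds to the extent that it pushes such estimates towards zero for every feature an attacker might extract.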

    TAFFO: The compiler-based precision tuner

    We present TAFFO, a framework that automatically performs precision tuning to exploit the performance/accuracy trade-off. In order to avoid expensive dynamic analyses, TAFFO leverages programmer annotations which encapsulate domain knowledge about the conditions under which the software being optimized will run. As a result, TAFFO is easy to use and provides state-of-the-art optimization efficacy in a variety of hardware configurations and application domains. We provide guidelines for the effective exploitation of TAFFO by showing a typical example of usage on a simple application, achieving a speedup of up to 60% at the price of an absolute error of 3.53 × 10⁻⁵. TAFFO is modular and based on the solid LLVM technology, which allows extensibility to improved analysis techniques, and comprehensive support for the most common precision-reduced data types and programming languages. As a result, the TAFFO technology has been selected as the precision tuning tool of the European Training Network on Approximate Computing.
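
The bounded absolute error that precision reduction introduces, of the kind quoted above, can be illustrated with plain fixed-point rounding. This is a generic sketch unrelated to TAFFO's internals; the sample values and the 15-bit format are arbitrary.

```python
def quantize(x, frac_bits):
    """Round x to the nearest value representable with `frac_bits`
    fractional bits, i.e. to the nearest multiple of 2**-frac_bits."""
    scale = 2 ** frac_bits
    return round(x * scale) / scale

# Rounding to the nearest representable value costs at most half a unit
# in the last place: |x - quantize(x, f)| <= 2 ** -(f + 1).
xs = [0.1, 0.333333, 0.718281, 0.9]
worst = max(abs(x - quantize(x, 15)) for x in xs)
print(worst <= 2 ** -16)  # → True
```

Choosing the cheapest format whose error bound still satisfies the application's quality metric is precisely the trade-off a precision tuner automates.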

    SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning

    Deploying machine learning models in production may allow adversaries to infer sensitive information about training data. There is a vast literature analyzing different types of inference risks, ranging from membership inference to reconstruction attacks. Inspired by the success of games (i.e., probabilistic experiments) to study security properties in cryptography, some authors describe privacy inference risks in machine learning using a similar game-based style. However, adversary capabilities and goals are often stated in subtly different ways from one presentation to another, which makes it hard to relate and compose results. In this paper, we present a game-based framework to systematize the body of knowledge on privacy inference risks in machine learning. We use this framework to (1) provide a unifying structure for definitions of inference risks, (2) formally establish known relations among definitions, and (3) uncover hitherto unknown relations that would have been difficult to spot otherwise. Comment: 20 pages; to appear in the 2023 IEEE Symposium on Security and Privacy.
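
The game-based style the paper systematizes can be illustrated with a toy membership-inference experiment. The "model" here (the training-set mean) and the ad-hoc adversary are deliberately trivial illustrations, not constructions taken from the paper.

```python
import random

def membership_game(trials=2000, seed=1):
    """Repeat the membership-inference game `trials` times.
    Challenger: sample a training point x and a challenge point z, flip
    a secret bit b, and release a 'model' (here: the mean of a two-point
    training set that contains z iff b == 1).
    Adversary: guess b given only the model and z.
    Returns the adversary's empirical success rate; 0.5 means the model
    leaks nothing about membership."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = rng.gauss(0.0, 1.0)             # background training point
        z = rng.gauss(0.0, 1.0)             # challenge point
        b = rng.randrange(2)                # secret membership bit
        other = z if b == 1 else rng.gauss(0.0, 1.0)
        model = (x + other) / 2.0           # the released "model"
        # Adversary: the mean is pulled towards z when z was a member.
        guess = 1 if model * z > z * z / 4.0 else 0
        wins += (guess == b)
    return wins / trials

acc = membership_game()
print(f"adversary accuracy: {acc:.2f}")  # typically well above the 0.5 baseline
```

The accuracy minus the 0.5 baseline is the adversary's advantage, and relating such advantages across differently phrased games is exactly what the paper's framework makes rigorous.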